21 research outputs found

    Learning to Detect Violent Videos using Convolutional Long Short-Term Memory

    Full text link
    Developing a technique for the automatic analysis of surveillance videos in order to identify the presence of violence is of broad interest. In this work, we propose a deep neural network for the purpose of recognizing violent videos. A convolutional neural network is used to extract frame-level features from a video. The frame-level features are then aggregated using a variant of the long short-term memory that uses convolutional gates. The convolutional neural network, together with the convolutional long short-term memory, is capable of capturing localized spatio-temporal features, which enables the analysis of local motion taking place in the video. We also propose to use adjacent frame differences as the input to the model, thereby forcing it to encode the changes occurring in the video. The performance of the proposed feature extraction pipeline is evaluated on three standard benchmark datasets in terms of recognition accuracy. Comparison of the results obtained with state-of-the-art techniques revealed the promising capability of the proposed method in recognizing violent videos. Comment: Accepted at the International Conference on Advanced Video and Signal Based Surveillance (AVSS 2017).
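    A minimal PyTorch-style sketch of the pipeline this abstract describes is given below: adjacent-frame differences as input, a small convolutional backbone standing in for the frame-level feature extractor, and an LSTM cell whose gates are convolutions. The backbone, layer sizes, and the ConvLSTMCell name are illustrative assumptions, not the authors' exact architecture.

        # Sketch only: toy backbone and layer sizes are assumptions, not the paper's model.
        import torch
        import torch.nn as nn

        class ConvLSTMCell(nn.Module):
            """LSTM cell whose gates are computed with convolutions, so the hidden
            state keeps a spatial layout (H x W) instead of a flat vector."""
            def __init__(self, in_ch, hid_ch, k=3):
                super().__init__()
                self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

            def forward(self, x, state):
                h, c = state
                i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
                i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
                c = f * c + i * torch.tanh(g)
                h = o * torch.tanh(c)
                return h, c

        def frame_differences(video):            # video: (T, C, H, W)
            # Differences of adjacent frames force the model to encode change.
            return video[1:] - video[:-1]

        backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        cell = ConvLSTMCell(32, 64)

        video = torch.randn(8, 3, 64, 64)        # toy 8-frame clip
        diffs = frame_differences(video)
        h = torch.zeros(1, 64, 32, 32)
        c = torch.zeros(1, 64, 32, 32)
        for t in range(diffs.shape[0]):
            feat = backbone(diffs[t:t + 1])      # frame-level spatial features
            h, c = cell(feat, (h, c))            # aggregated with convolutional gates
        logits = nn.Linear(64, 2)(h.mean(dim=(2, 3)))   # violent vs. non-violent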

    LSTA: Long Short-Term Attention for Egocentric Action Recognition

    Get PDF
    Egocentric activity recognition is one of the most challenging tasks in video analysis. It requires a fine-grained discrimination of small objects and their manipulation. While some methods rely on strong supervision and attention mechanisms, they are either annotation consuming or do not take spatio-temporal patterns into account. In this paper we propose LSTA as a mechanism to focus on features from spatially relevant parts while attention is being tracked smoothly across the video sequence. We demonstrate the effectiveness of LSTA on egocentric activity recognition with an end-to-end trainable two-stream architecture, achieving state-of-the-art performance on four standard benchmarks. Comment: Accepted to CVPR 2019.
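    The core idea, spatial attention that is tracked smoothly across the video sequence, can be illustrated roughly as follows. This is a loose PyTorch sketch under assumptions of my own (a 1x1 convolution for relevance scores and simple momentum-based smoothing); it is not the paper's LSTA recurrence or its two-stream architecture.

        # Sketch only: the scoring layer and momentum smoothing are assumptions.
        import torch
        import torch.nn as nn

        class TrackedSpatialAttention(nn.Module):
            def __init__(self, feat_ch, momentum=0.5):
                super().__init__()
                self.score = nn.Conv2d(feat_ch, 1, kernel_size=1)  # per-location relevance
                self.momentum = momentum                           # how much past attention persists

            def forward(self, feats):                # feats: (T, C, H, W)
                T, _, H, W = feats.shape
                prev = torch.full((1, 1, H, W), 1.0 / (H * W))     # uniform initial attention
                attended = []
                for t in range(T):
                    a = torch.softmax(self.score(feats[t:t + 1]).flatten(1), dim=1).view(1, 1, H, W)
                    a = self.momentum * prev + (1 - self.momentum) * a     # smooth tracking over time
                    attended.append((feats[t:t + 1] * a).sum(dim=(2, 3)))  # attention-pooled feature
                    prev = a
                return torch.cat(attended)           # (T, C) sequence for a recurrent classifier

        feats = torch.randn(10, 32, 7, 7)            # toy per-frame feature maps
        pooled = TrackedSpatialAttention(32)(feats)  # torch.Size([10, 32])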

    Attention is All We Need: Nailing Down Object-centric Attention for Egocentric Activity Recognition

    Full text link
    In this paper we propose an end-to-end trainable deep neural network model for egocentric activity recognition. Our model is built on the observation that egocentric activities are highly characterized by the objects and their locations in the video. Based on this, we develop a spatial attention mechanism that enables the network to attend to regions containing objects that are correlated with the activity under consideration. We learn highly specialized attention maps for each frame using class-specific activations from a CNN pre-trained for generic image recognition, and use them for spatio-temporal encoding of the video with a convolutional LSTM. Our model is trained in a weakly supervised setting using raw video-level activity-class labels. Nonetheless, on standard egocentric activity benchmarks our model surpasses the currently best performing method, which relies on strong supervision from hand segmentation and object locations during training, by up to 6 percentage points in recognition accuracy. We visually analyze the attention maps generated by the network, revealing that it successfully identifies the relevant objects present in the video frames, which may explain the strong recognition performance. We also present an extensive ablation analysis of the design choices. Comment: Accepted to BMVC 2018.
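    A rough sketch of the class-activation-style attention described here: weight the final convolutional features of a CNN pre-trained for image recognition by its classifier weights for a chosen class, yielding a spatial map that highlights object regions before temporal encoding. The tensor names, sizes, and normalization below are assumptions for illustration.

        # Sketch only: conv_feats and fc_weight stand in for a pretrained CNN's
        # last convolutional features and classifier weights.
        import torch

        def class_activation_attention(conv_feats, fc_weight, class_idx):
            """conv_feats: (C, H, W) last-layer features of a pretrained CNN.
            fc_weight:  (num_classes, C) weights of its average-pooled classifier.
            Returns an (H, W) map highlighting regions tied to class_idx."""
            cam = torch.einsum('c,chw->hw', fc_weight[class_idx], conv_feats)
            cam = cam - cam.min()
            return cam / (cam.max() + 1e-6)          # normalized to [0, 1]

        conv_feats = torch.randn(512, 7, 7)
        fc_weight = torch.randn(1000, 512)
        attn = class_activation_attention(conv_feats, fc_weight, class_idx=281)
        weighted = conv_feats * attn                 # object-focused features for the ConvLSTM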

    Convolutional Long Short-Term Memory Networks for Recognizing First Person Interactions

    Get PDF
    We present a novel deep learning approach for addressing the problem of interaction recognition from a first-person perspective. The approach uses a pair of convolutional neural networks, whose parameters are shared, for extracting frame-level features from successive frames of the video. The frame-level features are then aggregated using a convolutional long short-term memory. The final hidden state of the convolutional long short-term memory is used for classification into the respective categories. In our network the spatio-temporal structure of the input is preserved until the very final processing stage. Experimental results show that our method outperforms the state of the art on the most recent first-person interaction datasets that involve complex ego-motion. On UTKinect, it competes with methods that use depth images and skeletal joint information along with RGB images, while it surpasses previous methods that use only RGB images by more than 20% in recognition accuracy.
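    The parameter sharing between the two frame-level CNNs can be sketched briefly in PyTorch: the same module (and hence the same weights) is applied to both successive frames, and the resulting features stay spatial for the convolutional LSTM. The toy backbone below is an assumption, not the paper's network.

        # Sketch only: a toy shared backbone applied to two successive frames.
        import torch
        import torch.nn as nn

        backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))

        frame_t, frame_t1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
        # The same module processes both frames, which is what "a pair of CNNs
        # with shared parameters" amounts to in practice.
        feat_t, feat_t1 = backbone(frame_t), backbone(frame_t1)
        pair_feat = torch.cat([feat_t, feat_t1], dim=1)   # (1, 64, 32, 32), still spatial
        # pair_feat would then be aggregated over time by a convolutional LSTM,
        # preserving the spatio-temporal structure until classification.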

    Sparse distributed localized gradient fused features of objects

    No full text
    The sparse, hierarchical, and modular processing of natural signals is related to the ability of humans to recognize objects with high accuracy. In this study, we report a sparse feature processing and encoding method, which improved the recognition performance of an automated object recognition system. Randomly distributed, localized gradient-enhanced features were selected before employing aggregate functions for representation, where we used a modular and hierarchical approach to detect the object features. These object features were combined with a minimum distance classifier, thereby obtaining object recognition accuracies of 93% on the Amsterdam Library of Object Images (ALOI) database, 92% on the Columbia Object Image Library (COIL-100) database, and 69% on the PASCAL Visual Object Challenge 2007 database. The object recognition performance was shown to be robust to variations in noise, object scaling, and object shifts. Finally, a comparison with eight existing object recognition methods indicated that our new method improved the recognition accuracy by 10% on ALOI, 8% on COIL-100, and 10% on the PASCAL Visual Object Challenge 2007 database.
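    The classification side of this pipeline, a minimum distance (nearest class-mean) classifier over gradient-based features, can be sketched as follows. The histogram-of-gradient-magnitude descriptor is a crude stand-in for the sparse localized gradient fused features and is an assumption for illustration.

        # Sketch only: the descriptor is a placeholder, not the paper's feature fusion.
        import numpy as np

        def gradient_features(img):
            """Gradient-magnitude histogram as a fixed-length descriptor."""
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            hist, _ = np.histogram(mag, bins=32, range=(0, mag.max() + 1e-6))
            return hist / (hist.sum() + 1e-6)

        def fit_class_means(features, labels):
            return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

        def predict(means, feat):
            # Assign the class whose mean descriptor is closest (minimum distance).
            return min(means, key=lambda c: np.linalg.norm(feat - means[c]))

        # Toy usage with random "images" from two classes.
        imgs = np.random.rand(20, 64, 64)
        labels = np.array([0] * 10 + [1] * 10)
        feats = np.stack([gradient_features(im) for im in imgs])
        means = fit_class_means(feats, labels)
        print(predict(means, gradient_features(imgs[0])))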